Python: Add Google PaLM connector with chat completion and text embedding #2258
Merged
Conversation
github-merge-queue bot pushed a commit that referenced this pull request on Aug 17, 2023:
…le (#2076)

### Motivation and Context
Implementation of the Google PaLM connector with text completion and an example file to demonstrate its functionality. Closes #1979

### Description
1. Implemented the Google PaLM connector with text completion
2. Added an example file to `python/samples/kernel-syntax-examples`
3. Added integration tests with different inputs to kernel.run_async
4. Added unit tests to ensure successful initialization of the class and successful API calls
5. Three optional arguments (top_k, safety_settings, client) for google.generativeai.generate_text were not included. See https://developers.generativeai.google/api/python/google/generativeai/generate_text for more information about the function and its arguments.

I also opened a PR for text embedding and chat completion: #2258

### Contribution Checklist
- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#dev-scripts) raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄

Currently there are no warnings. There was one warning when first installing genai with `poetry add google.generativeai==v0.1.0rc2` from within the poetry shell: "The locked version 0.1.0rc2 for google-generativeai is a yanked version. Reason for being yanked: Release is marked as supporting Py3.8, but in practice it requires 3.9". We would need to require later versions of Python to fix it.

Co-authored-by: Abby Harrison <[email protected]>
Co-authored-by: Abby Harrison <[email protected]>
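For context on the text-completion side described in that commit, here is a minimal sketch of the google.generativeai call the #2076 connector wraps; the model id and parameter values are illustrative assumptions, not taken from the merged code.

```python
# A minimal sketch (not the connector's code) of the google.generativeai text
# completion call referenced above. top_k, safety_settings, and client are the
# optional arguments the connector intentionally does not expose.
import google.generativeai as palm

palm.configure(api_key="YOUR_PALM_API_KEY")  # placeholder key

completion = palm.generate_text(
    model="models/text-bison-001",  # assumed PaLM text model id
    prompt="Write a one-line summary of the Semantic Kernel project.",
    temperature=0.7,
    max_output_tokens=256,
)
print(completion.result)  # generated text, or None if the prompt was blocked
```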
…palm chat example
awharrison-28 approved these changes on Aug 21, 2023
mkarle approved these changes on Aug 21, 2023
SOE-YoungS pushed a commit to SOE-YoungS/semantic-kernel that referenced this pull request on Nov 1, 2023:
…le (microsoft#2076)
SOE-YoungS pushed a commit to SOE-YoungS/semantic-kernel that referenced this pull request on Nov 1, 2023:
…ding (microsoft#2258)
Motivation and Context
Implementation of a Google PaLM connector with chat completion and text embedding, plus three example files to demonstrate their functionality.
Closes #2098
I also opened a PR for text completion: #2076

Description
What's new:
1. Implemented the Google PaLM connector with chat completion and text embedding
2. Added 3 example files to `python/samples/kernel-syntax-examples`
3. The example files show PaLM chat in action with skills, with memory, and with plain user messages; the memory example also uses PaLM embeddings (see the wiring sketch after this list)
4. Added integration tests covering chat with skills and embedding with the kernel.memory functions
5. Added unit tests to verify successful class initialization and correct behavior of the class functions
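As referenced above, a minimal wiring sketch of the new services with a kernel, assuming the 2023-era semantic-kernel Python API; the import path, the `GooglePalmTextEmbedding` class name, the constructor signature, and the model ids are assumptions rather than excerpts from the merged code.

```python
# A minimal sketch (assumed names) of registering the new PaLM services with a
# kernel, roughly what the example files under kernel-syntax-examples show.
import semantic_kernel as sk
import semantic_kernel.connectors.ai.google_palm as sk_gp  # assumed import path

api_key = "YOUR_PALM_API_KEY"  # placeholder
kernel = sk.Kernel()

# Chat completion service (class name confirmed by the PR; model id assumed).
kernel.add_chat_service(
    "palm_chat", sk_gp.GooglePalmChatCompletion("models/chat-bison-001", api_key)
)
# Text embedding service (class name assumed) backing the memory example.
kernel.add_text_embedding_generation_service(
    "palm_embedding",
    sk_gp.GooglePalmTextEmbedding("models/embedding-gecko-001", api_key),
)
kernel.register_memory_store(memory_store=sk.memory.VolatileMemoryStore())
```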
There are some important differences between Google PaLM chat and OpenAI chat:
1. PaLM has two functions for chatting, `chat` and `reply`. The `chat` function in Google's genai library starts a new conversation, and `reply` continues the conversation. `reply` is an attribute of the response object returned by `chat`, so an instance of the `GooglePalmChatCompletion` class needs a way to determine which function to use, which is why I introduced a private attribute to store the response object. See https://developers.generativeai.google/api/python/google/generativeai/types/ChatResponse (a sketch of this pattern follows the list).
2. PaLM does not use system messages. Instead, the `chat` function takes a parameter called `context`, which serves the same purpose: priming the assistant with certain behaviors and information. So when the user passes a system message to `complete_chat_async`, it is passed to `chat` as the `context` parameter.
3. Semantic memory works with the chat service as long as the user creates a chat prompt template; the prompt containing the memory needs to be added to that template as a system message. See `python/samples/kernel-syntax-examples/google_palm_chat_with_memory.py` for more details (a second sketch follows the list). If the only purpose of `complete_async` in `GooglePalmChatCompletion` is to send memories plus user messages to the chat service as a text prompt, then `complete_async` is not fulfilling its intended purpose; a possible solution would be to send the text prompt as a request to the text service within `complete_async`.
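To make points 1 and 2 concrete, here is a minimal sketch of the `chat`/`reply` pattern and the `context` parameter using the google.generativeai 0.1.x API this PR targets; the prompt text, model choice, and key handling are illustrative.

```python
# A minimal sketch of the chat/reply pattern described in points 1 and 2.
# palm.chat() starts a conversation; reply() on the returned ChatResponse
# continues it. The `context` argument plays the role of a system message.
import google.generativeai as palm

palm.configure(api_key="YOUR_PALM_API_KEY")  # placeholder key

# First turn: no prior response object exists, so chat() is called.
response = palm.chat(
    context="You are a concise assistant.",   # forwarded system message
    messages=["What is the capital of France?"],
)
print(response.last)  # latest assistant message

# Later turns: reply() is an attribute of the stored response object and
# continues the same conversation, which is why GooglePalmChatCompletion keeps
# the response in a private attribute to decide between chat() and reply().
response = response.reply("And what about Italy?")
print(response.last)
```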
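For point 3, a minimal sketch assuming the 2023-era semantic-kernel Python chat-prompt-template API (`ChatPromptTemplate`, `PromptTemplateConfig`, `SemanticFunctionConfig`); it shows only the step described above, injecting the memory-laden prompt as a system message, and `register_chat_with_memory` and `recalled_facts` are hypothetical names. The full flow is in `google_palm_chat_with_memory.py`.

```python
# A minimal sketch (assumed API, not the sample file verbatim) of point 3:
# recalled memories enter the chat through a system message on a chat prompt
# template, which the PaLM connector maps to the `context` parameter of chat().
import semantic_kernel as sk


def register_chat_with_memory(kernel: sk.Kernel, recalled_facts: str):
    prompt_config = sk.PromptTemplateConfig.from_completion_parameters(
        max_tokens=2000, temperature=0.7
    )
    prompt_template = sk.ChatPromptTemplate(
        "{{$user_input}}", kernel.prompt_template_engine, prompt_config
    )
    # The memory-laden prompt goes in as a system message.
    prompt_template.add_system_message(
        f"You are a friendly assistant. Useful facts about the user:\n{recalled_facts}"
    )
    function_config = sk.SemanticFunctionConfig(prompt_config, prompt_template)
    return kernel.register_semantic_function("ChatBot", "Chat", function_config)
```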
Contribution Checklist
- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄
Currently there are no warnings. There was one warning when first installing genai with `poetry add google.generativeai==v0.1.0rc2` from within the poetry shell: "The locked version 0.1.0rc2 for google-generativeai is a yanked version. Reason for being yanked: Release is marked as supporting Py3.8, but in practice it requires 3.9". We would need to require later versions of Python to fix it.